DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers is needed to manually screen each submission before it's approved to be posted on the DonorsChoose.org website.
Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are three main problems they need to solve:
1) How to scale current manual processes and resources to screen 500,000 projects so that they can be posted as quickly and as efficiently as possible.
2) How to increase the consistency of project vetting across different volunteers to improve the experience for teachers.
3) How to focus volunteer time on the applications that need the most assistance.
The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.
The train.csv data set provided by DonorsChoose contains the following features:
| Feature | Description |
|---|---|
| project_id | A unique identifier for the proposed project. Example: p036502 |
| project_title | Title of the project. |
| project_grade_category | Grade level of students for which the project is targeted. One of: Grades PreK-2, Grades 3-5, Grades 6-8, Grades 9-12 |
| project_subject_categories | One or more (comma-separated) subject categories for the project. |
| school_state | State where school is located (two-letter U.S. postal code). Example: WY |
| project_subject_subcategories | One or more (comma-separated) subject subcategories for the project. |
| project_resource_summary | An explanation of the resources needed for the project. |
| project_essay_1 | First application essay* |
| project_essay_2 | Second application essay* |
| project_essay_3 | Third application essay* |
| project_essay_4 | Fourth application essay* |
| project_submitted_datetime | Datetime when project application was submitted. Example: 2016-04-28 12:43:56.245 |
| teacher_id | A unique identifier for the teacher of the proposed project. Example: bdf8baa8fedef6bfeec7ae4ff1c15c56 |
| teacher_prefix | Teacher's title. |
| teacher_number_of_previously_posted_projects | Number of project applications previously submitted by the same teacher. Example: 2 |
* See the section Notes on the Essay Data for more details about these features.
Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:
| Feature | Description |
|---|---|
| id | A project_id value from the train.csv file. Example: p036502 |
| description | Description of the resource. Example: Tenor Saxophone Reeds, Box of 25 |
| quantity | Quantity of the resource required. Example: 3 |
| price | Price of the resource required. Example: 9.95 |
Note: Many projects require multiple resources. The id value corresponds to a project_id in train.csv, so you can use it as a key to retrieve all resources needed for a project, as sketched below.
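For instance, a minimal sketch of such a lookup (assuming resources.csv has been loaded into a DataFrame named resource_data, as is done later in this notebook):
# all resource rows for the example project above
print(resource_data[resource_data['id'] == 'p036502'])
# aggregate total price and quantity per project, for use as features
price_data = resource_data.groupby('id').agg({'price': 'sum', 'quantity': 'sum'}).reset_index()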
The data set contains the following label (the value you will attempt to predict):
| Label | Description |
|---|---|
| project_is_approved | A binary flag indicating whether DonorsChoose approved the project. A value of 0 indicates the project was not approved; a value of 1 indicates it was approved. |
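The two classes are not evenly balanced (the models below use class_weight='balanced' for this reason); a quick check of the label distribution once the data is loaded:
# fraction of approved (1) vs. not approved (0) projects
print(project_data['project_is_approved'].value_counts(normalize=True))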
# Note - several code snippets have been used from the following link: https://colab.research.google.com/drive/1EkYHI-vGKnURqLL_u5LEf3yb0YJBVbZW
# This link was provided by the Appliedai team to answer a question about data leakage
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle
from tqdm import tqdm
import os
import plotly
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
from collections import Counter
# save matplotlib default parameters; when testing, the plot style changed after running a seaborn heatmap
import matplotlib as mpl
inline_rc = dict(mpl.rcParams)
#Use all records and test running time
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
# how to replace elements in list python: https://stackoverflow.com/a/2582163/4084039
cols = ['Date' if x=='project_submitted_datetime' else x for x in list(project_data.columns)]
#sort dataframe based on time pandas python: https://stackoverflow.com/a/49702492/4084039
project_data['Date'] = pd.to_datetime(project_data['project_submitted_datetime'])
project_data.drop('project_submitted_datetime', axis=1, inplace=True)
project_data.sort_values(by=['Date'], inplace=True)
# how to reorder columns pandas python: https://stackoverflow.com/a/13148611/4084039
project_data = project_data[cols]
project_data.head(2)
print("Number of data points in train data", resource_data.shape)
print(resource_data.columns.values)
resource_data.head(2)
project_subject_categories
categories = list(project_data['project_subject_categories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in categories:
    temp = ""
    # consider text like "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','):  # split it into parts: ["Math & Science", " Warmth", " Care & Hunger"]
        if 'The' in j.split():  # split each category on spaces: "Math & Science" => ["Math", "&", "Science"]
            j = j.replace('The', '')  # if the word "The" is present, remove it
        j = j.replace(' ', '')  # remove all spaces: "Math & Science" => "Math&Science"
        temp += j.strip() + " "  # " abc ".strip() removes the leading/trailing spaces
    temp = temp.replace('&', '_')  # replace '&' with '_': "Math&Science" => "Math_Science"
    cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
project_subject_subcategories
sub_categories = list(project_data['project_subject_subcategories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
sub_cat_list = []
for i in sub_categories:
    temp = ""
    # consider text like "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','):  # split it into parts: ["Math & Science", " Warmth", " Care & Hunger"]
        if 'The' in j.split():  # split each subcategory on spaces: "Math & Science" => ["Math", "&", "Science"]
            j = j.replace('The', '')  # if the word "The" is present, remove it
        j = j.replace(' ', '')  # remove all spaces: "Math & Science" => "Math&Science"
        temp += j.strip() + " "  # " abc ".strip() removes the leading/trailing spaces
    temp = temp.replace('&', '_')  # replace '&' with '_': "Math&Science" => "Math_Science"
    sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
# merge the four essay text columns into one column
# (fillna('') prevents NaN essays from becoming the literal string "nan")
project_data["essay"] = project_data["project_essay_1"].fillna('').map(str) + \
                        project_data["project_essay_2"].fillna('').map(str) + \
                        project_data["project_essay_3"].fillna('').map(str) + \
                        project_data["project_essay_4"].fillna('').map(str)
project_data.head(2)
# printing some sample essays
print(project_data['essay'].values[0])
print("="*50)
print(project_data['essay'].values[150])
print("="*50)
print(project_data['essay'].values[1000])
print("="*50)
print(project_data['essay'].values[20000])
print("="*50)
print(project_data['essay'].values[99999])
print("="*50)
# https://stackoverflow.com/a/47091490/4084039
def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)
    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
#remove spacial character: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
# https://gist.github.com/sebleier/554280
# this is the standard stop-word list with the negation words 'no', 'nor', 'not' removed, so negations are preserved
stopwords= {'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"}
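For reference, roughly the same set can be built from NLTK's stop-word corpus instead of being hard-coded; a minimal sketch (assumes nltk.download('stopwords') has been run; the exact contents may differ slightly across NLTK versions):
# build the stop-word set from NLTK, keeping the negation words
from nltk.corpus import stopwords as nltk_stopwords
stopwords = set(nltk_stopwords.words('english')) - {'no', 'nor', 'not'}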
# Combining all the above steps
preprocessed_essays = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['essay'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_essays.append(sent.lower().strip())
# after preprocessing
preprocessed_essays[20000]
project_data['essay'] = preprocessed_essays
# number of words in each essay
project_data['words_in_essay'] = project_data['essay'].str.split().apply(len)
# similarly we preprocess the titles
preprocessed_titles = []
for sentence in tqdm(project_data['project_title'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e not in stopwords)
    preprocessed_titles.append(sent.lower().strip())
# after preprocessing
preprocessed_titles[1000]
project_data['project_title'] = preprocessed_titles
# number of words in each title
project_data['words_in_title'] = project_data['project_title'].str.split().apply(len)
# unique values:
#array(['Grades PreK-2', 'Grades 9-12', 'Grades 6-8', 'Grades 3-5'],
# dtype=object)
#preprocess project_grade_category for CountVectorizer
project_data['project_grade_category'] = project_data['project_grade_category'].str.replace(' ', '_')
project_data['project_grade_category'] = project_data['project_grade_category'].str.replace('-', '_')
#https://stackoverflow.com/questions/13842088/set-value-for-particular-cell-in-pandas-dataframe-using-index
# In [18]: %timeit df.set_value('C', 'x', 10)
# 100000 loops, best of 3: 2.9 µs per loop
# In [20]: %timeit df['x']['C'] = 10
# 100000 loops, best of 3: 6.31 µs per loop
# In [81]: %timeit df.at['C', 'x'] = 10
# 100000 loops, best of 3: 9.2 µs per loop
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# import nltk
# nltk.download('vader_lexicon')
sid = SentimentIntensityAnalyzer()
project_data['neg'] = 0.0
project_data['neu'] = 0.0
project_data['pos'] = 0.0
project_data['compound'] = 0.0
for index, row in project_data.iterrows():
    ss = sid.polarity_scores(row['essay'])
    project_data.at[index, 'neg'] = ss['neg']
    project_data.at[index, 'neu'] = ss['neu']
    project_data.at[index, 'pos'] = ss['pos']
    project_data.at[index, 'compound'] = ss['compound']
# we can use these 4 things as features/attributes (neg, neu, pos, compound)
# neg: 0.0, neu: 0.753, pos: 0.247, compound: 0.93
project_data[['neg','neu','pos','compound']].head()
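To illustrate what these scores look like, a quick example on a made-up sentence (the exact numbers depend on the VADER lexicon version):
# the four VADER scores for a sample sentence
print(sid.polarity_scores("My students need new books, and they love to read!"))
# something like: {'neg': 0.0, 'neu': 0.6, 'pos': 0.4, 'compound': 0.8}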

# please write all the code with proper documentation, and proper titles for each subsection
# go through documentation and blogs before you start coding
# first figure out what to do, and then think about how to do it
# reading and understanding error messages will be very helpful in debugging your code
# when you plot any graph make sure you use
# a. Title that describes your plot; this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
from sklearn.model_selection import train_test_split
X = project_data.drop(['project_is_approved'], axis=1)
y = project_data['project_is_approved'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify=y, random_state = 123)
X_train, X_cv, y_train, y_cv = train_test_split(X_train, y_train, test_size=0.33, stratify=y_train, random_state = 123)
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
X_train = pd.merge(X_train, price_data, on='id', how='left')
X_cv = pd.merge(X_cv, price_data, on='id', how='left')
X_test = pd.merge(X_test, price_data, on='id', how='left')
from sklearn.preprocessing import Normalizer
normalizer = Normalizer()
# normalizer.fit(X_train['price'].values)
# this will raise an error: Expected 2D array, got 1D array instead:
# array=[105.22 215.96 96.01 ... 368.98 80.53 709.67].
# Reshape your data either using
# array.reshape(-1, 1) if your data has a single feature, or
# array.reshape(1, -1) if it contains a single sample.
normalizer.fit(X_train['price'].values.reshape(1, -1))
X_train_price_norm = normalizer.transform(X_train['price'].values.reshape(1, -1))
X_cv_price_norm = normalizer.transform(X_cv['price'].values.reshape(1, -1))
X_test_price_norm = normalizer.transform(X_test['price'].values.reshape(1, -1))
X_train_price_norm = X_train_price_norm.reshape(-1,1)
X_cv_price_norm = X_cv_price_norm.reshape(-1,1)
X_test_price_norm = X_test_price_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_price_norm.shape, y_train.shape)
print(X_cv_price_norm.shape, y_cv.shape)
print(X_test_price_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['quantity'].values.reshape(1, -1))
X_train_quantity_norm = normalizer.transform(X_train['quantity'].values.reshape(1, -1))
X_cv_quantity_norm = normalizer.transform(X_cv['quantity'].values.reshape(1, -1))
X_test_quantity_norm = normalizer.transform(X_test['quantity'].values.reshape(1, -1))
X_train_quantity_norm = X_train_quantity_norm.reshape(-1,1)
X_cv_quantity_norm = X_cv_quantity_norm.reshape(-1,1)
X_test_quantity_norm = X_test_quantity_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_quantity_norm.shape, y_train.shape)
print(X_cv_quantity_norm.shape, y_cv.shape)
print(X_test_quantity_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))
X_train_previously_posted_projects_norm = normalizer.transform(X_train['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))
X_cv_previously_posted_projects_norm = normalizer.transform(X_cv['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))
X_test_previously_posted_projects_norm = normalizer.transform(X_test['teacher_number_of_previously_posted_projects'].values.reshape(1, -1))
X_train_previously_posted_projects_norm = X_train_previously_posted_projects_norm.reshape(-1,1)
X_cv_previously_posted_projects_norm =X_cv_previously_posted_projects_norm.reshape(-1,1)
X_test_previously_posted_projects_norm = X_test_previously_posted_projects_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_previously_posted_projects_norm.shape, y_train.shape)
print(X_cv_previously_posted_projects_norm.shape, y_cv.shape)
print(X_test_previously_posted_projects_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['neu'].values.reshape(1, -1))
X_train_neu_norm = normalizer.transform(X_train['neu'].values.reshape(1, -1))
X_cv_neu_norm = normalizer.transform(X_cv['neu'].values.reshape(1, -1))
X_test_neu_norm = normalizer.transform(X_test['neu'].values.reshape(1, -1))
X_train_neu_norm = X_train_neu_norm.reshape(-1,1)
X_cv_neu_norm =X_cv_neu_norm.reshape(-1,1)
X_test_neu_norm = X_test_neu_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_neu_norm.shape, y_train.shape)
print(X_cv_neu_norm.shape, y_cv.shape)
print(X_test_neu_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['neg'].values.reshape(1, -1))
X_train_neg_norm = normalizer.transform(X_train['neg'].values.reshape(1, -1))
X_cv_neg_norm = normalizer.transform(X_cv['neg'].values.reshape(1, -1))
X_test_neg_norm = normalizer.transform(X_test['neg'].values.reshape(1, -1))
X_train_neg_norm = X_train_neg_norm.reshape(-1,1)
X_cv_neg_norm =X_cv_neg_norm.reshape(-1,1)
X_test_neg_norm = X_test_neg_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_neg_norm.shape, y_train.shape)
print(X_cv_neg_norm.shape, y_cv.shape)
print(X_test_neg_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['pos'].values.reshape(1, -1))
X_train_pos_norm = normalizer.transform(X_train['pos'].values.reshape(1, -1))
X_cv_pos_norm = normalizer.transform(X_cv['pos'].values.reshape(1, -1))
X_test_pos_norm = normalizer.transform(X_test['pos'].values.reshape(1, -1))
X_train_pos_norm = X_train_pos_norm.reshape(-1,1)
X_cv_pos_norm =X_cv_pos_norm.reshape(-1,1)
X_test_pos_norm = X_test_pos_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_pos_norm.shape, y_train.shape)
print(X_cv_pos_norm.shape, y_cv.shape)
print(X_test_pos_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['compound'].values.reshape(1, -1))
X_train_compound_norm = normalizer.transform(X_train['compound'].values.reshape(1, -1))
X_cv_compound_norm = normalizer.transform(X_cv['compound'].values.reshape(1, -1))
X_test_compound_norm = normalizer.transform(X_test['compound'].values.reshape(1, -1))
X_train_compound_norm = X_train_compound_norm.reshape(-1,1)
X_cv_compound_norm =X_cv_compound_norm.reshape(-1,1)
X_test_compound_norm = X_test_compound_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_compound_norm.shape, y_train.shape)
print(X_cv_compound_norm.shape, y_cv.shape)
print(X_test_compound_norm.shape, y_test.shape)
print("="*100)
normalizer = Normalizer()
normalizer.fit(X_train['words_in_essay'].values.reshape(1, -1))
X_train_words_in_essay_norm = normalizer.transform(X_train['words_in_essay'].values.reshape(1, -1))
X_cv_words_in_essay_norm = normalizer.transform(X_cv['words_in_essay'].values.reshape(1, -1))
X_test_words_in_essay_norm = normalizer.transform(X_test['words_in_essay'].values.reshape(1, -1))
X_train_words_in_essay_norm = X_train_words_in_essay_norm.reshape(-1,1)
X_cv_words_in_essay_norm = X_cv_words_in_essay_norm.reshape(-1,1)
X_test_words_in_essay_norm = X_test_words_in_essay_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_words_in_essay_norm.shape, y_train.shape)
print(X_cv_words_in_essay_norm.shape, y_cv.shape)
print(X_test_words_in_essay_norm.shape, y_test.shape)
print("="*100)
X_train['words_in_essay'].isnull().values.any()
normalizer = Normalizer()
normalizer.fit(X_train['words_in_title'].values.reshape(1, -1))
X_train_words_in_title_norm = normalizer.transform(X_train['words_in_title'].values.reshape(1, -1))
X_cv_words_in_title_norm = normalizer.transform(X_cv['words_in_title'].values.reshape(1, -1))
X_test_words_in_title_norm = normalizer.transform(X_test['words_in_title'].values.reshape(1, -1))
X_train_words_in_title_norm = X_train_words_in_title_norm.reshape(-1,1)
X_cv_words_in_title_norm = X_cv_words_in_title_norm.reshape(-1,1)
X_test_words_in_title_norm = X_test_words_in_title_norm.reshape(-1,1)
print("After vectorizations")
print(X_train_words_in_title_norm.shape, y_train.shape)
print(X_cv_words_in_title_norm.shape, y_cv.shape)
print(X_test_words_in_title_norm.shape, y_test.shape)
print("="*100)
from collections import Counter
vectorizer = CountVectorizer()
vectorizer.fit(X_train['clean_categories'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_clean_cat_ohe = vectorizer.transform(X_train['clean_categories'].values)
X_cv_clean_cat_ohe = vectorizer.transform(X_cv['clean_categories'].values)
X_test_clean_cat_ohe = vectorizer.transform(X_test['clean_categories'].values)
print("After vectorizations")
print(X_train_clean_cat_ohe.shape, y_train.shape)
print(X_cv_clean_cat_ohe.shape, y_cv.shape)
print(X_test_clean_cat_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
vectorizer = CountVectorizer()
vectorizer.fit(X_train['clean_subcategories'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_clean_sub_ohe = vectorizer.transform(X_train['clean_subcategories'].values)
X_cv_clean_sub_ohe = vectorizer.transform(X_cv['clean_subcategories'].values)
X_test_clean_sub_ohe = vectorizer.transform(X_test['clean_subcategories'].values)
print("After vectorizations")
print(X_train_clean_sub_ohe.shape, y_train.shape)
print(X_cv_clean_sub_ohe.shape, y_cv.shape)
print(X_test_clean_sub_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
# you can do the similar thing with state, teacher_prefix and project_grade_category also
vectorizer = CountVectorizer()
vectorizer.fit(X_train['school_state'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_state_ohe = vectorizer.transform(X_train['school_state'].values)
X_cv_state_ohe = vectorizer.transform(X_cv['school_state'].values)
X_test_state_ohe = vectorizer.transform(X_test['school_state'].values)
print("After vectorizations")
print(X_train_state_ohe.shape, y_train.shape)
print(X_cv_state_ohe.shape, y_cv.shape)
print(X_test_state_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
vectorizer = CountVectorizer()
vectorizer.fit(X_train['teacher_prefix'].fillna(' ').values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_teacher_ohe = vectorizer.transform(X_train['teacher_prefix'].fillna(' ').values)
X_cv_teacher_ohe = vectorizer.transform(X_cv['teacher_prefix'].fillna(' ').values)
X_test_teacher_ohe = vectorizer.transform(X_test['teacher_prefix'].fillna(' ').values)
print("After vectorizations")
print(X_train_teacher_ohe.shape, y_train.shape)
print(X_cv_teacher_ohe.shape, y_cv.shape)
print(X_test_teacher_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
#print("="*100)
vectorizer = CountVectorizer()
vectorizer.fit(X_train['project_grade_category'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_grade_ohe = vectorizer.transform(X_train['project_grade_category'].values)
X_cv_grade_ohe = vectorizer.transform(X_cv['project_grade_category'].values)
X_test_grade_ohe = vectorizer.transform(X_test['project_grade_category'].values)
print("After vectorizations")
print(X_train_grade_ohe.shape, y_train.shape)
print(X_cv_grade_ohe.shape, y_cv.shape)
print(X_test_grade_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
vectorizer = CountVectorizer(min_df=10,ngram_range=(1,2), max_features=5000)
vectorizer.fit(X_train['essay'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_essay_bow = vectorizer.transform(X_train['essay'].values)
X_cv_essay_bow = vectorizer.transform(X_cv['essay'].values)
X_test_essay_bow = vectorizer.transform(X_test['essay'].values)
print("After vectorizations")
print(X_train_essay_bow.shape, y_train.shape)
print(X_cv_essay_bow.shape, y_cv.shape)
print(X_test_essay_bow.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
vectorizer = CountVectorizer()
vectorizer.fit(X_train['project_title'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
X_train_title_bow = vectorizer.transform(X_train['project_title'].values)
X_cv_title_bow = vectorizer.transform(X_cv['project_title'].values)
X_test_title_bow = vectorizer.transform(X_test['project_title'].values)
print("After vectorizations")
print(X_train_title_bow.shape, y_train.shape)
print(X_cv_title_bow.shape, y_cv.shape)
print(X_test_title_bow.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=10,ngram_range=(1,2), max_features=5000)
vectorizer.fit(X_train['essay'].values) # fit has to happen only on train data
# we use the fitted TfidfVectorizer to convert the text to vector
X_train_essay_Tfidf = vectorizer.transform(X_train['essay'].values)
X_cv_essay_Tfidf = vectorizer.transform(X_cv['essay'].values)
X_test_essay_Tfidf = vectorizer.transform(X_test['essay'].values)
print("After vectorizations")
print(X_train_essay_Tfidf.shape, y_train.shape)
print(X_cv_essay_Tfidf.shape, y_cv.shape)
print(X_test_essay_Tfidf.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
# Similarly you can vectorize for title also
vectorizer = TfidfVectorizer()
vectorizer.fit(X_train['project_title'].values) # fit has to happen only on train data
# we use the fitted TfidfVectorizer to convert the text to vector
X_train_title_Tfidf = vectorizer.transform(X_train['project_title'].values)
X_cv_title_Tfidf = vectorizer.transform(X_cv['project_title'].values)
X_test_title_Tfidf = vectorizer.transform(X_test['project_title'].values)
print("After vectorizations")
print(X_train_title_Tfidf.shape, y_train.shape)
print(X_cv_title_Tfidf.shape, y_cv.shape)
print(X_test_title_Tfidf.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
'''
# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039
def loadGloveModel(gloveFile):
    print("Loading Glove Model")
    f = open(gloveFile, 'r', encoding="utf8")
    model = {}
    for line in tqdm(f):
        splitLine = line.split()
        word = splitLine[0]
        embedding = np.array([float(val) for val in splitLine[1:]])
        model[word] = embedding
    print("Done.", len(model), " words loaded!")
    return model

model = loadGloveModel('glove.42B.300d.txt')
# ============================
Output:
Loading Glove Model
1917495it [06:32, 4879.69it/s]
Done. 1917495 words loaded!
# ============================
words = []
for i in preprocessed_essays:
    words.extend(i.split(' '))
for i in preprocessed_titles:
    words.extend(i.split(' '))
print("all the words in the corpus", len(words))
words = set(words)
print("the unique words in the corpus", len(words))
inter_words = set(model.keys()).intersection(words)
print("The number of words that are present in both glove vectors and our corpus", \
      len(inter_words), "(", np.round(len(inter_words)/len(words)*100, 3), "%)")
words_corpus = {}
words_glove = set(model.keys())
for i in words:
    if i in words_glove:
        words_corpus[i] = model[i]
print("word 2 vec length", len(words_corpus))
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
import pickle
with open('glove_vectors', 'wb') as f:
    pickle.dump(words_corpus, f)
'''
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
model = pickle.load(f)
glove_words = set(model.keys())
# average Word2Vec
# compute the average word2vec for each essay
avg_w2v_vectors_train = []  # the avg-w2v for each essay is stored in this list
for sentence in tqdm(X_train['essay'].values):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    cnt_words = 0  # num of words with a valid vector in the essay
    for word in sentence.split():  # for each word in the essay
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_train.append(vector)
print(len(avg_w2v_vectors_train))
print(len(avg_w2v_vectors_train[0]))
print(avg_w2v_vectors_train[0])
avg_w2v_vectors_cv = []  # the avg-w2v for each essay is stored in this list
for sentence in tqdm(X_cv['essay'].values):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    cnt_words = 0  # num of words with a valid vector in the essay
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_cv.append(vector)
avg_w2v_vectors_test = []  # the avg-w2v for each essay is stored in this list
for sentence in tqdm(X_test['essay'].values):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    cnt_words = 0  # num of words with a valid vector in the essay
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_test.append(vector)
# Similarly we vectorize the titles
# average Word2Vec: compute the average word2vec for each title
avg_w2v_vectors_titles_train = []  # the avg-w2v for each title is stored in this list
for sentence in tqdm(X_train['project_title'].values):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    cnt_words = 0  # num of words with a valid vector in the title
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_titles_train.append(vector)
print(len(avg_w2v_vectors_titles_train))
print(len(avg_w2v_vectors_titles_train[0]))
print(avg_w2v_vectors_titles_train[0])
avg_w2v_vectors_titles_cv = []  # the avg-w2v for each title is stored in this list
for sentence in tqdm(X_cv['project_title'].values):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    cnt_words = 0  # num of words with a valid vector in the title
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_titles_cv.append(vector)
avg_w2v_vectors_titles_test = []  # the avg-w2v for each title is stored in this list
for sentence in tqdm(X_test['project_title'].values):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    cnt_words = 0  # num of words with a valid vector in the title
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors_titles_test.append(vector)
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model = TfidfVectorizer()
tfidf_model.fit(X_train['essay'].values)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
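The loops below compute, for each text s, a TF-IDF weighted average of its GloVe vectors, summing only over words present in both the GloVe and TF-IDF vocabularies:
v(s) = ( Σ_{w∈s} tfidf(w, s) · v(w) ) / ( Σ_{w∈s} tfidf(w, s) ),  where tfidf(w, s) = idf(w) · count(w, s) / |s|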
# TF-IDF weighted Word2Vec
# compute the tfidf-weighted word2vec for each essay
tfidf_w2v_vectors = []  # the tfidf-w2v for each essay is stored in this list
for sentence in tqdm(X_train['essay'].values):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector in the essay
    for word in sentence.split():  # for each word in the essay
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]  # the vector for this word
            # multiply the idf value (dictionary[word]) by the tf value (count/len)
            tf_idf = dictionary[word] * (sentence.count(word) / len(sentence.split()))
            vector += (vec * tf_idf)  # accumulate the tfidf-weighted vector
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors.append(vector)
print(len(tfidf_w2v_vectors))
print(len(tfidf_w2v_vectors[0]))
tfidf_w2v_vectors_cv = []  # the tfidf-w2v for each essay is stored in this list
for sentence in tqdm(X_cv['essay'].values):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector in the essay
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            tf_idf = dictionary[word] * (sentence.count(word) / len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_cv.append(vector)
tfidf_w2v_vectors_test = []  # the tfidf-w2v for each essay is stored in this list
for sentence in tqdm(X_test['essay'].values):  # for each essay
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector in the essay
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]
            tf_idf = dictionary[word] * (sentence.count(word) / len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_test.append(vector)
# S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model_titles = TfidfVectorizer()
tfidf_model_titles.fit(X_train['project_title'].values)
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model_titles.get_feature_names(), list(tfidf_model_titles.idf_)))
tfidf_words_titles = set(tfidf_model_titles.get_feature_names())
# average Word2Vec
# compute average word2vec for each review.
tfidf_w2v_vectors_titles = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(X_train['project_title'].values): # for each review/sentence
vector = np.zeros(300) # as word vectors are of zero length
tf_idf_weight =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if (word in glove_words) and (word in tfidf_words_titles):
vec = model[word] # getting the vector for each word
# here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
vector += (vec * tf_idf) # calculating tfidf weighted w2v
tf_idf_weight += tf_idf
if tf_idf_weight != 0:
vector /= tf_idf_weight
tfidf_w2v_vectors_titles.append(vector)
print(len(tfidf_w2v_vectors_titles))
print(len(tfidf_w2v_vectors_titles[0]))
# TF-IDF weighted Word2Vec: compute the tfidf-weighted word2vec for each title
tfidf_w2v_vectors_titles_cv = []  # the tfidf-w2v for each title is stored in this list
for sentence in tqdm(X_cv['project_title'].values):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector in the title
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words_titles):
            vec = model[word]
            tf_idf = dictionary[word] * (sentence.count(word) / len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_titles_cv.append(vector)
print(len(tfidf_w2v_vectors_titles_cv))
print(len(tfidf_w2v_vectors_titles_cv[0]))
# TF-IDF weighted Word2Vec: compute the tfidf-weighted word2vec for each title
tfidf_w2v_vectors_titles_test = []  # the tfidf-w2v for each title is stored in this list
for sentence in tqdm(X_test['project_title'].values):  # for each title
    vector = np.zeros(300)  # GloVe vectors are 300-dimensional; start from a zero vector
    tf_idf_weight = 0  # sum of tf-idf weights of words with a valid vector in the title
    for word in sentence.split():
        if (word in glove_words) and (word in tfidf_words_titles):
            vec = model[word]
            tf_idf = dictionary[word] * (sentence.count(word) / len(sentence.split()))
            vector += (vec * tf_idf)
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors_titles_test.append(vector)
print(len(tfidf_w2v_vectors_titles_test))
print(len(tfidf_w2v_vectors_titles_test[0]))
Apply Logistic Regression on the different featurizations, as mentioned in the instructions.
For every model you work on, make sure you do steps 2 and 3 of the instructions.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from collections import Counter
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score
# https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html#sklearn.metrics.roc_curve
from sklearn.metrics import roc_curve, auc
from scipy.sparse import hstack
import time
from sklearn.metrics import confusion_matrix
def batch_predict(clf, data):
    # roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
    # of the positive class, not the predicted outputs
    y_data_pred = []
    tr_loop = data.shape[0] - data.shape[0] % 1000
    # e.g. if data has 49041 rows, tr_loop = 49041 - 49041 % 1000 = 49000
    # this loop iterates up to the last multiple of 1000
    for i in range(0, tr_loop, 1000):
        y_data_pred.extend(clf.predict_proba(data[i:i+1000])[:, 1])
    # predict for the remaining data points
    if data.shape[0] % 1000 != 0:
        y_data_pred.extend(clf.predict_proba(data[tr_loop:])[:, 1])
    return y_data_pred
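batch_predict is a memory-friendly way to score a large sparse matrix in 1000-row chunks; a hypothetical call (the functions below happen to call predict_proba directly instead):
# y_test_pred = batch_predict(SGD, X_te)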
def model_performance(X_tr, y_train, X_cr, y_cv):
    train_auc = []
    cv_auc = []
    alpha = [10**-4, 10**-3, 10**-2, 10**-1, 10**0, 10**1, 10**2, 10**3, 10**4]
    for i in tqdm(alpha):
        SGD = SGDClassifier(loss='log', alpha=i, class_weight='balanced')
        SGD.fit(X_tr, y_train)
        y_train_pred = SGD.predict_proba(X_tr)[:, 1]
        y_cv_pred = SGD.predict_proba(X_cr)[:, 1]
        # roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
        # of the positive class, not the predicted outputs
        train_auc.append(roc_auc_score(y_train, y_train_pred))
        cv_auc.append(roc_auc_score(y_cv, y_cv_pred))
    plt.semilogx(alpha, train_auc, label='Train AUC')
    plt.semilogx(alpha, cv_auc, label='CV AUC')
    plt.scatter(alpha, train_auc, label='Train AUC points')
    plt.scatter(alpha, cv_auc, label='CV AUC points')
    plt.legend()
    plt.xlabel("alpha: hyperparameter")
    plt.ylabel("AUC")
    plt.title("ERROR PLOTS")
    plt.grid()
    plt.show()
def best_parameter_ROC(X_tr, y_train, X_te, y_test, best_alpha):
    SGD = SGDClassifier(loss='log', alpha=best_alpha, class_weight='balanced')
    SGD.fit(X_tr, y_train)
    # roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
    # of the positive class, not the predicted outputs
    y_train_pred = SGD.predict_proba(X_tr)[:, 1]
    y_test_pred = SGD.predict_proba(X_te)[:, 1]
    train_fpr, train_tpr, tr_thresholds = roc_curve(y_train, y_train_pred)
    test_fpr, test_tpr, te_thresholds = roc_curve(y_test, y_test_pred)
    plt.plot(train_fpr, train_tpr, label="train AUC =" + str(auc(train_fpr, train_tpr)))
    plt.plot(test_fpr, test_tpr, label="test AUC =" + str(auc(test_fpr, test_tpr)))
    plt.legend()
    plt.xlabel("False Positive Rate (fpr)")
    plt.ylabel("True Positive Rate (tpr)")
    plt.title("ROC")
    plt.grid()
    plt.show()
    return (train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred)
# we write our own predict function with a chosen threshold;
# we pick the threshold that maximizes tpr*(1-fpr), i.e. high tpr together with low fpr
def find_best_threshold(thresholds, fpr, tpr):
    t = thresholds[np.argmax(tpr * (1 - fpr))]
    # tpr*(1-fpr) is maximal when fpr is very low and tpr is very high
    print("the maximum value of tpr*(1-fpr)", max(tpr * (1 - fpr)), "for threshold", np.round(t, 3))
    return t

def predict_with_best_t(proba, threshold):
    predictions = []
    for i in proba:
        if i >= threshold:
            predictions.append(1)
        else:
            predictions.append(0)
    return predictions
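Equivalently, the thresholding can be vectorized with NumPy; a minimal sketch:
# a vectorized equivalent of predict_with_best_t
def predict_with_best_t_vec(proba, threshold):
    return (np.asarray(proba) >= threshold).astype(int)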
import seaborn as sns
def print_confusion_matrix(data, title, class_names, figsize=(10, 7)):
    df_cm = pd.DataFrame(data, columns=class_names, index=class_names)
    df_cm.index.name = 'Actual'
    df_cm.columns.name = 'Predicted'
    plt.rcParams.update({'font.size': 16})
    plt.title(title)
    sns.set(font_scale=1.4)  # for label size
    sns.heatmap(df_cm, cmap="Blues", annot=True, annot_kws={"size": 16}, fmt="d")  # font size
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, \
X_train_words_in_title_norm,X_train_essay_bow, X_train_title_bow, X_train_state_ohe, X_train_teacher_ohe, \
X_train_grade_ohe, X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm, \
X_train_previously_posted_projects_norm )).tocsr()
X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
X_cv_words_in_title_norm,X_cv_essay_bow, X_cv_title_bow, X_cv_state_ohe, X_cv_teacher_ohe, \
X_cv_grade_ohe, X_cv_clean_cat_ohe, X_cv_clean_sub_ohe , \
X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()
X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
X_test_words_in_title_norm,X_test_essay_bow, X_test_title_bow, X_test_state_ohe, X_test_teacher_ohe, \
X_test_grade_ohe, X_test_clean_cat_ohe, X_test_clean_sub_ohe , X_test_price_norm, \
X_test_previously_posted_projects_norm )).tocsr()
print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)
#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)
model_performance(X_tr, y_train,X_cr,y_cv)
# best alpha chosen from the hyperparameter loop above
best_alpha_bow_loop = .01
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train, X_te, y_test, best_alpha_bow_loop)
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
X_train_essay_Tfidf, X_train_title_Tfidf, X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, \
X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()
X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
X_cv_words_in_title_norm,X_cv_essay_Tfidf, X_cv_title_Tfidf, X_cv_state_ohe, X_cv_teacher_ohe, X_cv_grade_ohe, \
X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()
X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
X_test_words_in_title_norm,X_test_essay_Tfidf, X_test_title_Tfidf, X_test_state_ohe, X_test_teacher_ohe, \
X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , X_test_price_norm, \
X_test_previously_posted_projects_norm )).tocsr()
print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)
#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)
model_performance(X_tr, y_train,X_cr,y_cv)
best_alpha_tfidf_loop = .0001
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train, X_te, y_test, best_alpha_tfidf_loop)
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
avg_w2v_vectors_train, avg_w2v_vectors_titles_train, X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, \
X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()
X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
X_cv_words_in_title_norm,avg_w2v_vectors_cv, avg_w2v_vectors_titles_cv, X_cv_state_ohe, \
X_cv_teacher_ohe, X_cv_grade_ohe, X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , X_cv_price_norm, \
X_cv_previously_posted_projects_norm )).tocsr()
X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
X_test_words_in_title_norm,avg_w2v_vectors_test, avg_w2v_vectors_titles_test, X_test_state_ohe, \
X_test_teacher_ohe, X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , \
X_test_price_norm,X_test_previously_posted_projects_norm )).tocsr()
print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)
#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)
model_performance(X_tr, y_train,X_cr,y_cv)
best_alpha_w2v_loop = .001
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train, X_te, y_test, best_alpha_w2v_loop)
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
tfidf_w2v_vectors, tfidf_w2v_vectors_titles, X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, \
X_train_clean_cat_ohe , X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()
X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
X_cv_words_in_title_norm,tfidf_w2v_vectors_cv, tfidf_w2v_vectors_titles_cv, X_cv_state_ohe, \
X_cv_teacher_ohe, X_cv_grade_ohe, X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , \
X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()
X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
X_test_words_in_title_norm,tfidf_w2v_vectors_test, tfidf_w2v_vectors_titles_test, X_test_state_ohe, \
X_test_teacher_ohe, X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , \
X_test_price_norm,X_test_previously_posted_projects_norm )).tocsr()
print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)
#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)
model_performance(X_tr, y_train,X_cr,y_cv)
best_alpha_tfidfw2v_loop = .001
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train, X_te, y_test, best_alpha_tfidfw2v_loop)
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])
#No text features.
X_tr = hstack((X_train_neg_norm, X_train_neu_norm, X_train_pos_norm, X_train_compound_norm, X_train_words_in_essay_norm, X_train_words_in_title_norm, \
X_train_state_ohe, X_train_teacher_ohe, X_train_grade_ohe, X_train_clean_cat_ohe , \
X_train_clean_sub_ohe , X_train_price_norm,X_train_previously_posted_projects_norm )).tocsr()
X_cr = hstack((X_cv_neg_norm, X_cv_neu_norm, X_cv_pos_norm, X_cv_compound_norm, X_cv_words_in_essay_norm, \
X_cv_words_in_title_norm,X_cv_state_ohe, X_cv_teacher_ohe, X_cv_grade_ohe, X_cv_clean_cat_ohe , X_cv_clean_sub_ohe , X_cv_price_norm,X_cv_previously_posted_projects_norm )).tocsr()
X_te = hstack((X_test_neg_norm, X_test_neu_norm, X_test_pos_norm, X_test_compound_norm, X_test_words_in_essay_norm, \
X_test_words_in_title_norm,X_test_state_ohe, X_test_teacher_ohe, X_test_grade_ohe, X_test_clean_cat_ohe , X_test_clean_sub_ohe , X_test_price_norm,X_test_previously_posted_projects_norm )).tocsr()
print("Final Data matrix")
print(X_tr.shape, y_train.shape)
print(X_cr.shape, y_cv.shape)
print(X_te.shape, y_test.shape)
print("="*100)
#reset the default parameters for matplotlib
mpl.rcParams.update(inline_rc)
model_performance(X_tr, y_train,X_cr,y_cv)
best_alpha_no_text_loop = .01
train_fpr, train_tpr, tr_thresholds, y_train_pred, y_test_pred = best_parameter_ROC(X_tr, y_train, X_te, y_test, best_alpha_no_text_loop)
print("="*100)
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
data = confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
print_confusion_matrix(data, "Train confusion matrix", [0,1])
data = confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
print_confusion_matrix(data, "Test confusion matrix", [0,1])
#### http://zetcode.com/python/prettytable/
from prettytable import PrettyTable
# Using Loop to determine best Hyperparameters
x = PrettyTable()
x.field_names = ["Vectorizer", "Model", "Hyperparameter", "AUC"]
x.add_row(["BOW", "LR",best_alpha_bow_loop,0.7134])
x.add_row(["TFIDF", "LR",best_alpha_tfidf_loop,0.6945])
x.add_row(["W2V", "LR",best_alpha_w2v_loop,0.6752])
x.add_row(["TFIDFW2V", "LR",best_alpha_tfidfw2v_loop,0.6878])
print(x)
# No text features
x = PrettyTable()
x.field_names = ["Vectorizer", "Model", "Hyperparameter", "AUC"]
x.add_row(["No Text Vectorizer", "LR",best_alpha_no_text_loop, 0.5626])
print(x)
Observations
1) BOW gave the best test AUC (0.7134).
2) Adding the text features clearly improved performance for every vectorizer, compared with the no-text baseline (AUC 0.5626).